Depth Separation in ReLU Networks for Approximating Smooth Non-Linear Functions
Authors
Abstract
We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
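As a concrete illustration of the depth-versus-width experiment mentioned above, the following is a minimal sketch, not the authors' code: it compares a shallow-wide and a deep-narrow ReLU network trained to fit the indicator of the unit Euclidean ball, assuming PyTorch. The input dimension, layer widths, sampling scheme, and optimizer settings are illustrative assumptions.

import torch
import torch.nn as nn

d = 10  # input dimension (illustrative assumption)

def relu_net(widths):
    # Fully connected ReLU network with the given hidden-layer widths.
    layers, in_dim = [], d
    for w in widths:
        layers += [nn.Linear(in_dim, w), nn.ReLU()]
        in_dim = w
    layers.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*layers)

shallow_wide = relu_net([2048])        # one wide hidden layer
deep_narrow = relu_net([64, 64, 64])   # three narrow hidden layers

def target(x):
    # Indicator of the unit ball: 1 if ||x||_2 <= 1, else 0.
    return (x.norm(dim=1, keepdim=True) <= 1.0).float()

def train(net, steps=2000, lr=1e-3):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        # Sample points with radii spread around the decision boundary.
        x = torch.randn(256, d)
        x = x / x.norm(dim=1, keepdim=True) * (2.0 * torch.rand(256, 1))
        loss = nn.functional.mse_loss(net(x), target(x))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

print("shallow-wide final loss:", train(shallow_wide))
print("deep-narrow final loss: ", train(deep_narrow))

With a comparable parameter budget, one would expect the deep-narrow network to fit the indicator noticeably better than the shallow-wide one, in line with the separation described in the abstract.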
Similar resources
Optimal approximation of piecewise smooth functions using deep ReLU neural networks
We study the necessary and sufficient complexity of ReLU neural networks—in terms of depth and number of weights—which is required for approximating classifier functions in an L^2-sense. As a model class, we consider the set E^β(R^d) of possibly discontinuous piecewise C^β functions f : [−1/2, 1/2]^d → R, where the different “smooth regions” of f are separated by C^β hypersurfaces. For given dimension d ≥ ...
Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks
We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-li...
Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations
This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0, 1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say ...
Error bounds for approximations with deep ReLU networks
We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds for the network complexity in the setting of approximations in Sobolev spaces. In particular, we prove that deep ReLU networks more efficiently approximate smooth functions than shallow networks. In the case of approximations of 1D Lipschitz...
Approximating Continuous Functions by ReLU Nets of Minimal Width
This article concerns the expressive power of depth in deep feed-forward neural nets with ReLU activations. Specifically, we answer the following question: for a fixed d ≥ 1, what is the minimal width w so that neural nets with ReLU activations, input dimension d, hidden layer widths at most w, and arbitrary depth can approximate any continuous function of d variables arbitrarily well? It turns...
Journal: CoRR
Volume: abs/1610.09887
Pages: -
Publication date: 2016